
Hackers steal hotel guests’ payment data in new AI-driven campaign

A hacker group known as RevengeHotels is using artificial intelligence to boost its attacks on hotels in Brazil and elsewhere, researchers have found.

RevengeHotels has been active since 2015 and focuses on stealing payment card data from hotel guests and front-desk systems. The group’s latest campaigns rely on phishing emails disguised as invoices or job applications to trick staff into opening malicious attachments, according to a report by Russian cybersecurity firm Kaspersky.

During the attacks, the hackers deliver a remote access trojan, VenomRAT, capable of stealing files and controlling infected computers. VenomRAT, which sells for up to $650 on underground forums, is an evolution of the open-source QuasarRAT and offers functions such as credential theft and data exfiltration.

Kaspersky said much of the malicious code used in recent attacks appeared to have been generated with the help of large language models (LLMs), allowing the hackers to produce cleaner, more structured code with detailed comments. 

“This suggests that the threat actor is now leveraging AI to evolve its capabilities, a trend also reported among other cybercriminal groups,” the firm said.

While Brazil remains RevengeHotels’ primary target, Spanish-language phishing emails indicate the group is also going after hotels and tourism companies in countries such as Mexico, Argentina, Chile, Costa Rica and Spain. Previous campaigns have also struck hotels in Russia, Belarus and Turkey.

Kaspersky added that the attackers are rotating domains and payloads frequently to evade detection, but their ultimate goal remains the same: compromising hotel systems to harvest sensitive data from travelers worldwide.

Hacker groups are increasingly turning to artificial intelligence to make their attacks more effective. In a separate report this week, cybersecurity firm Genians said that North Korean hackers exploited OpenAI’s ChatGPT to generate deepfake military ID cards in a phishing campaign against South Korean defense-related institutions.

In a June report, OpenAI said state-backed threat actors from several countries are now using ChatGPT for illicit purposes ranging from malware refinement to employment scams and social media disinformation campaigns.

Daryna Antoniuk is a reporter for Recorded Future News based in Ukraine. She writes about cybersecurity startups, cyberattacks in Eastern Europe and the state of the cyberwar between Ukraine and Russia. She previously was a tech reporter for Forbes Ukraine. Her work has also been published at Sifted, The Kyiv Independent and The Kyiv Post.